3,842 research outputs found

    Higgs-$\mu$-$\tau$ Coupling at High and Low Energy Colliders

    There is no tree-level flavor-changing neutral current (FCNC) in the standard model (SM), which contains only one Higgs doublet. If more Higgs doublets are introduced for various reasons, tree-level FCNC is inevitable unless an extra symmetry is imposed. FCNC processes are therefore excellent probes of physics beyond the SM (BSM). In this paper, we study the lepton-flavor-violating (LFV) decay processes $h\rightarrow\mu\tau$ and $\tau\rightarrow\mu\gamma$ induced by the Higgs-$\mu$-$\tau$ vertex. The branching ratio of $\tau\rightarrow\mu\gamma$ also depends on the $ht\bar{t}$, $h\tau^+\tau^-$ and $hW^+W^-$ vertices. We categorize the BSM physics into two scenarios, with the Higgs coupling strengths either near to or away from their SM values. For the latter scenario, we take the spontaneously broken two-Higgs-doublet model (Lee model) as an example. We consider the constraints from recent LHC and B-factory data and find that the measurements give only weak constraints. At LHC Run II, $h\rightarrow\mu\tau$ will either be confirmed or have a stricter limit set on its branching ratio. Accordingly, $\textrm{Br}(\tau\rightarrow\mu\gamma)\lesssim\mathcal{O}(10^{-10}-10^{-8})$ for generally chosen parameters. In the positive case, $\tau\rightarrow\mu\gamma$ can be discovered with $\mathcal{O}(10^{10})$ $\tau$-pair samples at a SuperB factory, a Super $\tau$-charm factory, or a new Z factory. Future measurements of $\textrm{Br}(h\rightarrow\mu\tau)$ and $\textrm{Br}(\tau\rightarrow\mu\gamma)$ can be used to distinguish these two scenarios or to set strict constraints on the correlations among different Higgs couplings; see Table II in the text for details.
    Comment: 18 pages, 10 figures, 2 tables; more references added; more discussions about cancellation in the amplitude added according to the referee's suggestion
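    A rough illustration of why $\mathcal{O}(10^{10})$ $\tau$-pair samples suffice to probe branching ratios at the upper end of this range: the expected signal yield is simply $N_{\text{sig}} \approx 2\,N_{\tau\text{-pairs}}\cdot\textrm{Br}(\tau\rightarrow\mu\gamma)\cdot\epsilon$ (two $\tau$'s per pair, overall detection efficiency $\epsilon$). The numbers below are hypothetical placeholders within the quoted range, not results from the paper.

```python
# Rough signal-yield estimate for tau -> mu gamma at a high-luminosity tau factory.
# All inputs are hypothetical values within the ranges quoted in the abstract;
# the detection efficiency is an assumed placeholder, not a number from the paper.

n_tau_pairs = 1e10          # O(10^10) tau pairs, as quoted
branching_ratio = 1e-8      # upper end of the quoted Br(tau -> mu gamma) range
efficiency = 0.05           # assumed overall detection efficiency (placeholder)

n_tau = 2 * n_tau_pairs                              # two taus per pair
expected_signal = n_tau * branching_ratio * efficiency

print(f"Expected tau -> mu gamma events: {expected_signal:.0f}")
# With Br ~ 1e-8 this yields O(10) events even after a 5% efficiency,
# while Br ~ 1e-10 would leave essentially none.
```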

    Testing the homogeneity of the Universe using gamma-ray bursts

    In this paper, we study the homogeneity of the GRB distribution using a subsample of the Greiner GRB catalogue, which contains 314 objects with redshift $0<z<2.5$ (244 of them discovered by the Swift GRB Mission). We try to reconcile the dilemma between the new observations and the current theory of structure formation and growth. To test the results against possible biases in redshift determination and the incompleteness of the Greiner sample, we also apply our analysis to the 244 GRBs discovered by Swift and to the subsample presented by the Swift Gamma-Ray Burst Host Galaxy Legacy Survey (SHOALS). The real-space two-point correlation function (2PCF) of GRBs, $\xi(r)$, is calculated using the Landy-Szalay estimator. We perform a standard least-$\chi^2$ fit to the measured 2PCFs of GRBs and use the best-fit 2PCF to deduce a recently defined homogeneity scale. The homogeneity scale, $R_H$, is defined as the comoving radius of the sphere inside which the number of GRBs $N(<r)$ is proportional to $r^3$ within $1\%$, or equivalently above which the correlation dimension of the sample $D_2$ is within $1\%$ of $D_2=3$. For the Swift subsample of 244 GRBs, the correlation length and slope are $r_0 = 387.51 \pm 132.75~h^{-1}$ Mpc and $\gamma = 1.57 \pm 0.65$ (at the $1\sigma$ confidence level). The corresponding scale for a homogeneous distribution of GRBs is $r \geq 7{,}700~h^{-1}$ Mpc. The results help to alleviate the tension between the newly discovered excess clustering of GRBs and the cosmological principle of large-scale homogeneity. They imply that very massive structures in the relatively local Universe do not necessarily violate the cosmological principle and could conceivably be present.
    Comment: 7 pages, 5 figures, accepted by Astronomy & Astrophysics. The data used in this work (e.g. Tables 1 and 2) are publicly available online in electronic form at the CDS via anonymous ftp to cdsarc.u-strasbg.fr (130.79.128.5) or via http://cdsweb.u-strasbg.fr/cgi-bin/qcat?J/A+A
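    As a sketch of how the homogeneity scale follows from the best-fit power law $\xi(r)=(r/r_0)^{-\gamma}$: the cumulative counts scale as $N(<r)\propto\int_0^r[1+\xi(s)]\,s^2\,ds$, the correlation dimension is $D_2(r)=d\ln N/d\ln r$, and $R_H$ is the radius where $D_2$ reaches $2.97$ (within $1\%$ of 3). The snippet below is an illustrative recomputation from the quoted best-fit values, not the authors' code.

```python
# Illustrative computation of the homogeneity scale R_H from a power-law 2PCF,
# xi(r) = (r / r0)**(-gamma), using the best-fit values quoted in the abstract.
# R_H is the comoving radius where the correlation dimension D2 reaches 2.97.
from scipy.optimize import brentq

r0, gamma = 387.51, 1.57    # best-fit correlation length [h^-1 Mpc] and slope

def d2(r):
    """Correlation dimension D2(r) = d ln N(<r) / d ln r for a power-law xi."""
    # N(<r) ~ int_0^r [1 + xi(s)] s^2 ds = r^3/3 + r0^gamma * r^(3-gamma)/(3-gamma)
    n = r**3 / 3.0 + r0**gamma * r**(3.0 - gamma) / (3.0 - gamma)
    dn_dr = r**2 * (1.0 + (r / r0) ** (-gamma))   # integrand, by the fundamental theorem
    return r * dn_dr / n

# Solve D2(R_H) = 2.97, i.e. within 1% of the homogeneous value D2 = 3.
r_h = brentq(lambda r: d2(r) - 2.97, 100.0, 1e5)
print(f"R_H ~ {r_h:.0f} h^-1 Mpc")   # comes out of order 10^3-10^4 h^-1 Mpc,
                                      # consistent with the quoted scale
```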

    Space-efficient data sketching algorithms for network applications

    Sketching techniques are widely adopted in network applications. Sketching algorithms “encode” data into succinct data structures that can later be accessed and “decoded” for various purposes, such as network measurement, accounting, and anomaly detection. Bloom filters and counter braids are two well-known representatives of this category. Such sketching algorithms usually need to strike a tradeoff between performance (how much information can be revealed, and how fast) and cost (storage, transmission, and computation). This dissertation is dedicated to the research and development of several sketching techniques, including improved forms of stateful Bloom filters, statistical counter arrays, and error estimating codes. A Bloom filter is a space-efficient randomized data structure for approximately representing a set in order to support membership queries. The Bloom filter and its variants have found widespread use in many networking applications where it is important to minimize the cost of storing and communicating network data. In this thesis, we propose a family of Bloom filter variants augmented by a rank-indexing method. We show that such augmentation can significantly reduce both the space and the number of memory accesses required, especially when deletions of set elements from the Bloom filter need to be supported. The exact active counter array is another important building block in many sketching algorithms, where the storage cost of the array is of paramount concern. Previous approaches reduce the storage cost while either losing accuracy or supporting only passive measurements. In this thesis, we propose an exact statistics counter array architecture that supports active measurements (real-time reads and writes). It also leverages the aforementioned rank-indexing method and exploits statistical multiplexing to minimize the storage cost of the counter array. Error estimating coding (EEC) has recently been established as an important tool to estimate bit error rates in the transmission of packets over wireless links. In essence, the EEC problem is also a sketching problem, since the EEC code can be viewed as a sketch of the packet sent, which is decoded by the receiver to estimate the bit error rate. In this thesis, we first investigate the asymptotic bound of error estimating coding by viewing the problem from a two-party computation perspective, and then investigate its coding/decoding efficiency using Fisher information analysis. Further, we develop several sketching techniques, including the enhanced tug-of-war (EToW) sketch and the generalized EEC (gEEC) sketch family, which achieve around a 70% reduction in sketch size with similar estimation accuracy. For all solutions proposed above, we use theoretical tools such as information theory and communication complexity to investigate how far our proposed solutions are from the theoretical optimum. We show that the proposed techniques are asymptotically or empirically very close to the theoretical bounds.
    Ph.D. Dissertation. Committee Chair: Xu, Jun; Committee Members: Feamster, Nick; Li, Baochun; Romberg, Justin; Zegura, Ellen W.
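    As background for the Bloom-filter part of this work, the sketch below is a minimal standard Bloom filter (insertion and approximate membership query only); the rank-indexed, deletion-capable variants proposed in the dissertation are not reproduced here.

```python
# Minimal standard Bloom filter: insertion and approximate membership query.
# Textbook background only; the rank-indexed, deletion-capable variants proposed
# in the dissertation are not implemented here.
import hashlib

class BloomFilter:
    def __init__(self, num_bits: int, num_hashes: int):
        self.m = num_bits
        self.k = num_hashes
        self.bits = bytearray((num_bits + 7) // 8)

    def _positions(self, item: str):
        # Derive k bit positions from one digest (double-hashing style).
        digest = hashlib.sha256(item.encode()).digest()
        h1 = int.from_bytes(digest[:8], "big")
        h2 = int.from_bytes(digest[8:16], "big") | 1
        return [(h1 + i * h2) % self.m for i in range(self.k)]

    def add(self, item: str) -> None:
        for p in self._positions(item):
            self.bits[p // 8] |= 1 << (p % 8)

    def __contains__(self, item: str) -> bool:
        # May return false positives, never false negatives.
        return all((self.bits[p // 8] >> (p % 8)) & 1 for p in self._positions(item))

bf = BloomFilter(num_bits=8192, num_hashes=5)
bf.add("10.0.0.1")
print("10.0.0.1" in bf, "10.0.0.2" in bf)   # True, almost certainly False
```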

    Efficient Optimization of Performance Measures by Classifier Adaptation

    In practical applications, machine learning algorithms are often needed to learn classifiers that optimize domain-specific performance measures. Previous research has focused on learning the needed classifier in isolation, yet learning a nonlinear classifier for nonlinear and nonsmooth performance measures remains hard. In this paper, rather than learning the needed classifier by optimizing the specific performance measure directly, we circumvent this problem with a novel two-step approach called CAPO: we first train nonlinear auxiliary classifiers with existing learning methods, and then adapt the auxiliary classifiers for the specific performance measure. In the first step, the auxiliary classifiers can be obtained efficiently with off-the-shelf learning algorithms. For the second step, we show that the classifier adaptation problem can be reduced to a quadratic programming problem, which is similar to linear SVMperf and can be solved efficiently. By exploiting nonlinear auxiliary classifiers, CAPO can generate a nonlinear classifier that optimizes a large variety of performance measures, including all performance measures based on the contingency table as well as AUC, while keeping high computational efficiency. Empirical studies show that CAPO is effective and computationally efficient, and it is even more efficient than linear SVMperf.
    Comment: 30 pages, 5 figures, to appear in IEEE Transactions on Pattern Analysis and Machine Intelligence, 201
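    A much-simplified sketch of the two-step idea (not the paper's quadratic-program formulation): train a nonlinear auxiliary classifier with an off-the-shelf learner, then adapt its real-valued output for the target performance measure, here by merely tuning a decision threshold on held-out data to maximize F1. The dataset, learner, and function names are illustrative assumptions.

```python
# Toy illustration of a CAPO-style two-step approach:
#  1) train a nonlinear auxiliary classifier with an off-the-shelf learner,
#  2) adapt its output for a specific performance measure (here: pick the
#     decision threshold that maximizes F1 on held-out data).
# This is a simplification; the paper adapts the classifier by solving a
# quadratic program similar to linear SVMperf, which is not reproduced here.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import f1_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, weights=[0.9, 0.1], random_state=0)
X_tr, X_val, y_tr, y_val = train_test_split(X, y, test_size=0.5, random_state=0)

# Step 1: nonlinear auxiliary classifier from an off-the-shelf algorithm.
aux = GradientBoostingClassifier(random_state=0).fit(X_tr, y_tr)
scores = aux.predict_proba(X_val)[:, 1]

# Step 2: adapt for the target measure by scanning thresholds on held-out data.
thresholds = np.linspace(0.05, 0.95, 19)
best_t = max(thresholds, key=lambda t: f1_score(y_val, scores >= t))
print(f"best threshold {best_t:.2f}, "
      f"F1 {f1_score(y_val, scores >= best_t):.3f} vs "
      f"default-0.5 F1 {f1_score(y_val, scores >= 0.5):.3f}")
```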